
# Enhanced Masked Decoding

## Deberta Large (microsoft)

- License: MIT
- Tags: Large Language Model, Transformers, English
- Downloads: 15.07k · Likes: 16

DeBERTa is an improved BERT model that enhances performance through a disentangled attention mechanism and an enhanced masked decoder, surpassing BERT and RoBERTa on multiple natural language understanding tasks.
## Deberta Base (microsoft)

- License: MIT
- Tags: Large Language Model, English
- Downloads: 298.78k · Likes: 78

DeBERTa is an improved BERT model based on the disentangled attention mechanism and enhanced masked decoder, excelling in multiple natural language understanding tasks.
## Deberta Xlarge (microsoft)

- License: MIT
- Tags: Large Language Model, Transformers, English
- Downloads: 312 · Likes: 2

DeBERTa improves upon BERT and RoBERTa with a disentangled attention mechanism and enhanced masked decoder, demonstrating superior performance on most natural language understanding tasks.
## Deberta Base (kamalkraj)

- License: MIT
- Tags: Large Language Model, Transformers, English
- Downloads: 287 · Likes: 0

DeBERTa improves upon BERT and RoBERTa with a disentangled attention mechanism and an enhanced masked decoder, and excels in natural language understanding tasks.
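All of the cards above cite DeBERTa's disentangled attention mechanism, in which each token carries separate content and relative-position representations, and the attention score is a sum of content-to-content, content-to-position, and position-to-content terms. Below is a minimal single-head NumPy sketch of that score decomposition; every name and shape here is illustrative, and the softmax, multi-head splitting, attention masking, and the enhanced masked decoder itself are all omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_position_index(n, k):
    """delta(i, j) in [0, 2k): relative distance i - j, clipped and shifted by k."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    return np.clip(i - j + k, 0, 2 * k - 1)

def disentangled_attention_scores(H, P, Wq, Wk, Wqr, Wkr, k):
    """Single-head disentangled attention scores (pre-softmax).

    H: (n, d) token content vectors; P: (2k, d) relative position embeddings.
    """
    n, d = H.shape
    Qc, Kc = H @ Wq, H @ Wk      # content queries / keys
    Qr, Kr = P @ Wqr, P @ Wkr    # relative-position queries / keys
    delta = relative_position_index(n, k)
    c2c = Qc @ Kc.T                                       # content-to-content
    c2p = np.take_along_axis(Qc @ Kr.T, delta, axis=1)    # content-to-position
    p2c = np.take_along_axis(Kc @ Qr.T, delta, axis=1).T  # position-to-content
    # the three terms are summed and scaled by sqrt(3d)
    return (c2c + c2p + p2c) / np.sqrt(3 * d)

# toy demo with random weights
n, d, k = 5, 8, 3
H = rng.standard_normal((n, d))
P = rng.standard_normal((2 * k, d))
Ws = [rng.standard_normal((d, d)) * 0.1 for _ in range(4)]
scores = disentangled_attention_scores(H, P, *Ws, k)
print(scores.shape)  # (5, 5)
```

Setting the two position-projection matrices to zero reduces the score to the plain scaled content-to-content term, which is one quick way to sanity-check the decomposition.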
© 2025 AIbase